Cross-modal retrieval algorithm based on multi-level semantic discriminative guided hashing
LIU Fangming, ZHANG Hong
Journal of Computer Applications    2021, 41 (8): 2187-2192.   DOI: 10.11772/j.issn.1001-9081.2020101607
Most cross-modal hashing methods use a binary matrix to represent the degree of correlation, so they cannot capture the high-level semantic information in multi-label data, and they neglect maintaining the semantic structure and the discriminability of data features. Therefore, a cross-modal retrieval algorithm named ML-SDH (Multi-Level Semantics Discriminative guided Hashing) was proposed. In this algorithm, a multi-level semantic similarity matrix was used to discover the deeply correlated information in cross-modal data, and equally guided cross-modal hashing was used to express the correlations in semantic structure and discriminative classification. As a result, not only was the goal of encoding the high-level semantic information of multi-label data achieved, but the distinguishability and semantic similarity of the finally learned hash codes were also ensured by the constructed multi-level semantic structure. On the NUS-WIDE dataset with a hash code length of 32 bit, the mean Average Precision (mAP) of the proposed algorithm in two retrieval tasks is 19.48, 14.50 and 1.95 percentage points, and 16.32, 11.82 and 2.08 percentage points, higher than those of the DCMH (Deep Cross-Modal Hashing), PRDH (Pairwise Relationship guided Deep Hashing) and EGDH (Equally-Guided Discriminative Hashing) algorithms respectively.
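As an illustration of the multi-level (non-binary) similarity idea, a similarity matrix can be built from multi-label annotations so that pairs sharing more labels receive proportionally higher similarity. The normalized-overlap formula below is an assumption of this sketch, not necessarily the paper's exact definition:

```python
import numpy as np

def multilevel_similarity(labels):
    """Build a multi-level semantic similarity matrix from multi-label
    annotations: entry (i, j) is the normalized label overlap, so pairs
    sharing more labels get a higher (non-binary) similarity instead of
    the usual 0/1 relatedness."""
    L = np.asarray(labels, dtype=float)
    inter = L @ L.T                         # number of shared labels
    counts = L.sum(axis=1, keepdims=True)
    union = counts + counts.T - inter       # labels in either sample
    with np.errstate(divide="ignore", invalid="ignore"):
        S = np.where(union > 0, inter / union, 0.0)
    return S

# Three samples over four labels
labels = [[1, 1, 0, 0],
          [1, 1, 1, 0],
          [0, 0, 0, 1]]
S = multilevel_similarity(labels)
```

Samples 0 and 1 share two of three labels and so get similarity 2/3 rather than just "related".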
Dynamic recommendation algorithm for group-users' temporal behaviors
WEN Wen, LIU Fang, CAI Ruichu, HAO Zhifeng
Journal of Computer Applications    2021, 41 (1): 60-66.   DOI: 10.11772/j.issn.1001-9081.2020061010
Focusing on the issues that user preferences change over time in real systems and that one user ID may be shared by multiple members of a family, a dynamic recommendation algorithm was proposed for group-users that contain multiple types of members and whose preferences vary with time. Firstly, the user's historical behavior data were assumed to consist of exposure data and click data, and the current member role was identified by learning the role weights of all member types of the group-user at the present moment. Secondly, two design ideas were proposed for constructing a popularity model from the exposure data, and the training data were balanced by inverse propensity score weighting. Finally, matrix factorization was used to obtain the time-varying user latent preference factors and the item latent attribute factors, and their inner products were calculated to obtain the user's time-varying Top-K preference recommendations. Experimental results show that the proposed algorithm not only outperforms the benchmark method at no fewer than 16 of the 24 moments in a day on the three metrics of Recall, Mean Average Precision (MAP) and Normalized Discounted Cumulative Gain (NDCG), but also shortens the running time and reduces the computational time complexity.
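The inverse propensity score weighting step can be sketched as follows. Estimating an item's propensity as its exposure frequency, and clipping the weight to cap variance, are assumptions of this sketch rather than details taken from the paper:

```python
from collections import Counter

def ips_weights(exposures, clicks, clip=10.0):
    """Weight each click by the inverse of the item's estimated exposure
    propensity, so that items rarely shown to users are not drowned out
    by popular ones when training the recommender."""
    counts = Counter(item for _, item in exposures)
    total = len(exposures)
    weights = []
    for user, item in clicks:
        propensity = counts[item] / total   # estimated exposure probability
        weights.append(min(1.0 / propensity, clip))
    return weights

# Item "a" was exposed 3 times out of 4, item "b" once
exposures = [(1, "a"), (1, "a"), (2, "a"), (2, "b")]
clicks = [(1, "a"), (2, "b")]
w = ips_weights(exposures, clicks)
```

A click on the rarely exposed item "b" receives a weight three times larger than a click on the heavily exposed item "a".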
Deep in vivo quantitative photoacoustic imaging based on improved fixed point iterative method
LIU Fangyan, MENG Jing, SI Guangtao
Journal of Computer Applications    2019, 39 (10): 3093-3099.   DOI: 10.11772/j.issn.1001-9081.2019010076
Focusing on the reconstruction artifacts of photoacoustic images under a restricted view, an improved fixed-point iterative quantitative photoacoustic imaging method was proposed. Firstly, the original photoacoustic pressure data detected by the detector were reconstructed by the traditional back-projection algorithm to obtain the original photoacoustic pressure image. Secondly, the reconstruction artifacts were removed from the original photoacoustic pressure image by an adaptive Wiener filtering algorithm. Thirdly, the optical transmission model was used to solve for the optical fluence of the target imaging region. Finally, iterative calculation was performed to obtain the optical absorption coefficient of the target tissue. In addition, the Toast++ software was introduced into the fluence solution process to realize the forward solution of the optical transmission model, improving the efficiency and accuracy of quantitative imaging. Phantom and in vivo experiments show that, compared with the traditional fixed-point iterative method, the proposed method obtains photoacoustic images of higher quality, and the deep quantitative photoacoustic images reconstructed by it contain fewer artifacts. The optical absorption coefficient of the quantitatively reconstructed deep target tissue is close to that of the shallow target tissue, the former being about 70% of the latter. As a result, the proposed method can implement quantitative reconstruction of the optical absorption coefficient of deep biological tissue.
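The core fixed-point loop can be illustrated with a toy one-dimensional model. The function names, the Beer-Lambert fluence stand-in and all parameter values here are illustrative assumptions, not the paper's Toast++-based forward solver:

```python
import math

def fixed_point_absorption(p0, fluence, gamma=1.0, tol=1e-8, max_iter=100):
    """Toy 1-D fixed-point iteration for the absorption coefficient mu.
    The photoacoustic pressure model is p0 = gamma * mu * Phi(mu), so the
    update is mu <- p0 / (gamma * Phi(mu)). `fluence` is any callable
    returning the fluence Phi for the current mu estimate (in the paper
    this role is played by the forward optical-transmission solve)."""
    mu = p0 / gamma                          # initial guess: unit fluence
    for _ in range(max_iter):
        mu_next = p0 / (gamma * fluence(mu))
        if abs(mu_next - mu) < tol:
            return mu_next
        mu = mu_next
    return mu

# Synthetic check with a Beer-Lambert style fluence at depth d
d = 0.5
phi = lambda mu: math.exp(-mu * d)
mu_true = 0.8
p0 = mu_true * phi(mu_true)                  # simulated measured pressure
mu_est = fixed_point_absorption(p0, phi)
```

On this synthetic measurement the iteration recovers the true absorption coefficient.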
Parallel high utility pattern mining algorithm based on cluster partition
XING Shuning, LIU Fang'ai, ZHAO Xiaohui
Journal of Computer Applications    2016, 36 (8): 2202-2206.   DOI: 10.11772/j.issn.1001-9081.2016.08.2202
The existing algorithms generate many in-memory utility pattern trees when mining high utility patterns in a large-scale database, which occupies large memory space and loses some high utility itemsets. Therefore, a parallel high utility pattern mining algorithm based on cluster partition, named PUCP, was proposed on the Hadoop platform. Firstly, a clustering method was introduced to divide the transaction database into several sub-datasets. Secondly, the sub-datasets were allocated to the nodes of Hadoop to construct utility pattern trees. Finally, the conditional pattern bases of the same item generated from the utility pattern trees were allocated to the same node, reducing the number of crossover operations between nodes. Theoretical analysis and experimental results show that, compared with the mainstream serial high utility pattern mining algorithm UP-Growth (Utility Pattern Growth) and the parallel algorithm HUI-Growth (Parallel mining High Utility Itemsets by pattern-Growth), the mining efficiency of PUCP is increased by 61.2% and 16.6% respectively without affecting the reliability of the mining results, and the memory pressure of large-scale data mining can be effectively relieved by using the Hadoop platform.
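The cluster-partition step can be sketched as assigning each transaction to its most similar seed. Jaccard similarity to a seed itemset is an assumed stand-in for the paper's actual clustering method:

```python
def partition_transactions(transactions, seeds):
    """Assign each transaction to the seed (Hadoop node) with the highest
    Jaccard similarity, so similar transactions land on the same node and
    the conditional pattern bases of an item tend to stay together."""
    def jaccard(a, b):
        a, b = set(a), set(b)
        return len(a & b) / len(a | b) if a | b else 0.0
    parts = [[] for _ in seeds]
    for t in transactions:
        best = max(range(len(seeds)), key=lambda i: jaccard(t, seeds[i]))
        parts[best].append(t)
    return parts

seeds = [{"a", "b"}, {"x", "y"}]
transactions = [["a", "b", "c"], ["x", "z"], ["a"]]
parts = partition_transactions(transactions, seeds)
```

Transactions sharing items with the first seed end up on the first node, keeping cross-node traffic low.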
Collaborative filtering algorithm based on Bhattacharyya coefficient and Jaccard coefficient
YANG Jiahui, LIU Fangai
Journal of Computer Applications    2016, 36 (7): 2006-2010.   DOI: 10.11772/j.issn.1001-9081.2016.07.2006
The traditional neighborhood-based collaborative filtering recommendation algorithms suffer from data sparsity, and their similarity measures use only the ratings of co-rated items. Therefore, a Collaborative Filtering algorithm based on the Bhattacharyya coefficient and the Jaccard coefficient (CFBJ) was proposed. Similarity was measured by introducing the Bhattacharyya coefficient and the Jaccard coefficient: the Bhattacharyya coefficient can utilize all ratings made by a pair of users, removing the common-rating restriction, while the Jaccard coefficient increases the proportion of common items in the similarity measure. The nearest neighbors were selected by improving the accuracy of item similarity, and the preference prediction and personalized recommendation for active users were optimized. Experimental results show that the proposed algorithm achieves smaller error and higher classification accuracy than the MJD (Mean Jaccard Difference), PC (Pearson Correlation), JMSD (Jaccard and Mean Squared Difference) and PIP (Proximity-Impact-Popularity) algorithms. It effectively alleviates the data sparsity problem and enhances the accuracy of the recommendation system.
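The two coefficients can be sketched together as follows. The Bhattacharyya coefficient compares the users' whole rating distributions (no co-rated items needed), while the Jaccard coefficient rewards overlap of the rated item sets; combining them by a simple product is an assumption of this sketch:

```python
import math
from collections import Counter

def bc_jaccard_similarity(ratings_u, ratings_v, rating_levels=(1, 2, 3, 4, 5)):
    """Combined similarity: Bhattacharyya coefficient between the two
    users' rating distributions (uses ALL of each user's ratings, not
    only co-rated items) scaled by the Jaccard overlap of the rated
    item sets."""
    hist_u = Counter(r for _, r in ratings_u)
    hist_v = Counter(r for _, r in ratings_v)
    n_u, n_v = len(ratings_u), len(ratings_v)
    bc = sum(math.sqrt((hist_u[l] / n_u) * (hist_v[l] / n_v))
             for l in rating_levels)
    items_u = {i for i, _ in ratings_u}
    items_v = {i for i, _ in ratings_v}
    jac = len(items_u & items_v) / len(items_u | items_v)
    return bc * jac

# Two users with identical rating behavior on the same items
u = [("i1", 5), ("i2", 3)]
v = [("i1", 5), ("i2", 3)]
sim = bc_jaccard_similarity(u, v)
```

Identical users yield similarity 1; users with no rated items in common still get a meaningful Bhattacharyya term, but the Jaccard factor drives the score down.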
Improved algorithm for mining collaborative frequent itemsets in multiple data streams
WANG Xin, LIU Fang'ai
Journal of Computer Applications    2016, 36 (7): 1988-1992.   DOI: 10.11772/j.issn.1001-9081.2016.07.1988
In view of the low memory utilization and inefficient discovery of frequent itemset mining over multiple data streams, an improved algorithm for Mining Collaborative frequent itemsets in Multiple Data Streams (MCMD-Stream) was proposed. Firstly, a sliding window based on the bit-sequence technique was used as a single-pass method to find potential and frequent itemsets. Secondly, a Compressed frequent Pattern tree (CP-Tree), similar to the Frequent Pattern Tree (FP-Tree), was constructed to store the potential and frequent itemsets, and each node in the CP-Tree generated a logarithmic tilted-time window to save the counts of frequent itemsets. Finally, the valuable frequent itemsets that appear repeatedly in multiple data streams, namely the collaborative frequent itemsets, were obtained. Compared with the A-Stream and H-Stream algorithms, MCMD-Stream improves the mining efficiency of collaborative frequent itemsets in multiple data streams and also reduces memory usage. The experimental results show that MCMD-Stream can be efficiently applied to mining collaborative frequent itemsets in multiple data streams.
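The bit-sequence technique for sliding windows can be sketched as follows: each item's occurrences in the window become a bit vector, so the support of an itemset reduces to a bitwise AND plus a popcount. This is a generic illustration, not the paper's exact encoding:

```python
def bit_sequences(window):
    """Encode each item's occurrences in the sliding window as a bit
    sequence: bit p is set if the item appears in transaction p."""
    seqs = {}
    for pos, transaction in enumerate(window):
        for item in transaction:
            seqs[item] = seqs.get(item, 0) | (1 << pos)
    return seqs

def support(seqs, itemset):
    """Support of an itemset = popcount of the AND of its items' bit
    sequences; sliding the window by one transaction is just a shift."""
    items = iter(itemset)
    bits = seqs.get(next(items), 0)
    for item in items:
        bits &= seqs.get(item, 0)
    return bin(bits).count("1")

window = [["a", "b"], ["a"], ["a", "b", "c"]]
seqs = bit_sequences(window)
```

Here {a, b} appears in transactions 0 and 2, so its support is 2, computed with a single AND over the window.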
Optimized clustering algorithm based on density of hierarchical division
PANG Lin, LIU Fang'ai
Journal of Computer Applications    2016, 36 (6): 1634-1638.   DOI: 10.11772/j.issn.1001-9081.2016.06.1634
Traditional clustering algorithms cluster a dataset repeatedly and therefore have poor computational efficiency on large datasets. To solve this problem, a novel algorithm based on hierarchical division, named Clusters Optimization based on Density of Hierarchical Division (CODHD), was proposed to determine the optimal number of clusters and the initial cluster centers without clustering the dataset repeatedly. First of all, the statistics of the clustering features were obtained by scanning the dataset. Secondly, data partitions of different levels were generated bottom-up, the density of each data point in a partition was calculated, and the maximum-density point of each partition was taken as its initial center; at the same time, the minimum distance from each center to a data point of higher density was calculated, the average of the products of each center's density and its minimum distance was taken as the validity index, and a clustering quality curve over the different hierarchical divisions was built incrementally. Finally, the optimal number of clusters and the initial centers were estimated from the partition corresponding to the extreme point of the curve. The experimental results demonstrate that, compared with Clusters Optimization on Preprocessing Stage (COPS), the proposed CODHD improves clustering accuracy by 30% and clustering efficiency by at least 14.24%. The proposed algorithm has strong feasibility and practicability.
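The center-selection rule (density multiplied by distance to the nearest higher-density point, in the spirit of density peaks) can be sketched as below. This is a simplified single-level illustration; the paper applies the idea per partition across the hierarchy, and the density radius here is an assumed parameter:

```python
import math

def initial_centers(points, radius=1.0):
    """Rank candidate centers by (local density) * (distance to the
    nearest point of higher density): high scores mark points that are
    both dense and well separated, i.e. good initial cluster centers."""
    def dist(p, q):
        return math.dist(p, q)
    density = [sum(1 for q in points if dist(p, q) < radius) for p in points]
    scores = []
    for i, p in enumerate(points):
        higher = [dist(p, points[j]) for j in range(len(points))
                  if density[j] > density[i]]
        # A global density maximum gets its largest distance as delta
        delta = min(higher) if higher else max(dist(p, q) for q in points)
        scores.append(density[i] * delta)
    return sorted(range(len(points)), key=lambda i: -scores[i])

points = [(0, 0), (0.1, 0), (0, 0.1), (0.1, 0.1),   # dense cluster
          (5, 5), (5.1, 5), (5, 5.1)]               # sparser cluster
ranking = initial_centers(points)
```

The four points of the denser cluster outrank the sparser cluster's points, so the top of the ranking identifies where centers should be seeded.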
Single image super-resolution via independently adjustable sparse coefficients
NI Hao, RUAN Ruolin, LIU Fanghua, WANG Jianfeng
Journal of Computer Applications    2016, 36 (4): 1096-1099.   DOI: 10.11772/j.issn.1001-9081.2016.04.1096
An image recovered by example-based super-resolution has sharp edges but obvious artifacts. An improved super-resolution algorithm with independently adjustable sparse coefficients was proposed to eliminate the artifacts. In the dictionary-training phase, since both the high-resolution training images and the low-resolution ones are known, the sparse coefficients in the high-dimensional space and in the low-dimensional space of an image differ, so accurate high-resolution and low-resolution dictionaries were generated separately via an online dictionary learning algorithm. In the reconstruction phase, since the input low-resolution image is known but the target high-resolution image is unknown, the sparse coefficients in the two spaces were taken to be approximately the same. Different regularization parameters were set in the two phases to tune the corresponding sparse coefficients independently and obtain the best super-resolution results. According to the experimental results, the Peak Signal-to-Noise Ratio (PSNR) of the proposed method is on average 0.45 dB higher than that of sparse coding super-resolution, and the Structural SIMilarity (SSIM) is 0.011 higher. The proposed algorithm eliminates the artifacts and effectively recovers edge sharpness and texture details, improving the super-resolution results.
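As a rough illustration of the sparse-coding step, a minimal ISTA solver shows where the regularization parameter `lam` enters; setting it differently in the training and reconstruction phases is exactly the lever the algorithm tunes. The solver and its parameters are assumptions of this sketch, not the paper's online dictionary learning implementation:

```python
import numpy as np

def sparse_code(D, y, lam, n_iter=200):
    """Minimal ISTA sparse coding: argmin_x 0.5*||y - D x||^2 + lam*||x||_1.
    A larger `lam` yields sparser coefficients; the paper's point is that
    `lam` can be chosen independently for the training phase and the
    reconstruction phase."""
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ x - y)
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

# With an identity dictionary the solution is plain soft-thresholding
D = np.eye(2)
y = np.array([3.0, 0.5])
x = sparse_code(D, y, lam=1.0)
```

With `lam=1.0` the small coefficient is zeroed out and the large one is shrunk by 1, showing how the regularization parameter shapes the sparse code.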
Computing global unbalanced degree of signed networks based on culture algorithm
ZHAO Xiaohui, LIU Fang'ai
Journal of Computer Applications    2016, 36 (12): 3341-3346.   DOI: 10.11772/j.issn.1001-9081.2016.12.3341
Many approaches for computing the structural balance degree of a signed network focus only on the balance information of the local network, without considering the balance of the network at a larger scale or from a global viewpoint, and thus cannot discover the unbalanced links in the network. To solve this problem, a method for computing the global unbalanced degree of signed networks based on a cultural algorithm was proposed. The computation of the unbalanced degree was converted into an optimization problem by using the Ising spin glass model to describe the global state of the signed network. A new cultural algorithm with a double evolution structure, named Cultural Algorithm for Signed Network Balance (CA-SNB), was presented to solve the optimization problem. Firstly, a genetic algorithm was used to optimize the population space. Secondly, the better individuals were recorded in the belief space and the situational knowledge was summarized using a greedy strategy. Finally, the situational knowledge was used to guide the evolution of the population space. The convergence rate of CA-SNB was improved while preserving population diversity. The experimental results show that CA-SNB converges to the optimal solution faster and is more robust than the genetic algorithm and the matrix transformation algorithm. The proposed algorithm can compute the global unbalanced degree and discover unbalanced links at the same time.
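The Ising-model formulation can be illustrated directly: assign each node a spin and count frustrated edges; minimizing that count over all assignments (here by brute force rather than the paper's cultural algorithm) gives the global unbalanced degree:

```python
from itertools import product

def unbalanced_degree(signs, spins):
    """Ising-style frustration count for a signed network: an edge (i, j)
    with sign J in {+1, -1} is unbalanced under node assignment s in
    {+1, -1} when J * s_i * s_j < 0."""
    return sum(1 for (i, j), J in signs.items() if J * spins[i] * spins[j] < 0)

def min_unbalance(signs, n):
    """Global unbalanced degree: the minimum number of frustrated edges
    over all spin assignments (brute force, feasible only for tiny n)."""
    return min(unbalanced_degree(signs, assignment)
               for assignment in product((1, -1), repeat=n))

# A balanced triangle (+, +, +) vs a frustrated one (+, +, -)
balanced = {(0, 1): 1, (1, 2): 1, (0, 2): 1}
frustrated = {(0, 1): 1, (1, 2): 1, (0, 2): -1}
```

The all-positive triangle can be made frustration-free, while the (+, +, -) triangle always keeps at least one unbalanced link, which is exactly the link such a method should flag.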
Membership of mixed dependency set in strong partial ordered temporal scheme
WAN Jing, LIU Fang
Journal of Computer Applications    2015, 35 (8): 2345-2349.   DOI: 10.11772/j.issn.1001-9081.2015.08.2345

The solution of the membership problem is essential for designing a usable scheme decomposition algorithm. Because of the partial order among temporal types in a strongly partially ordered temporal scheme, its membership problem is difficult to solve. The concepts of the mixed dependency base on a given temporal type, the mixed dependency base in a strongly partially ordered scheme, the mixed set closure of partial temporal functional dependencies and temporal multi-valued dependencies, and the mixed closure of a strongly partially ordered scheme were given. Algorithms for the dependency base of an attribute and for the closure of attribute sets were also given. On this basis, an algorithm for the membership problem of the mixed dependency set in a strongly partially ordered scheme was put forward, and proofs of its termination, correctness and time complexity were presented. Application examples show that the studied theory and algorithms solve the determination of the membership problem for strongly partially ordered mixed dependencies, and provide a theoretical basis for strongly partially ordered temporal scheme decomposition and the design of temporal database normalization.
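The classical core of such membership algorithms is the attribute-closure computation, sketched below. The paper's contribution is extending it to mixed temporal and multi-valued dependencies under a partial order of temporal types, which this illustration does not attempt:

```python
def attribute_closure(attrs, fds):
    """Standard attribute-set closure: repeatedly apply functional
    dependencies X -> Y whose left side is already covered. Membership
    of X -> Y in the dependency set then reduces to checking whether
    Y is a subset of closure(X)."""
    closure = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if set(lhs) <= closure and not set(rhs) <= closure:
                closure |= set(rhs)
                changed = True
    return closure

# A -> B, B -> C, CD -> E
fds = [("A", "B"), ("B", "C"), ("CD", "E")]
```

For example, the closure of {A} is {A, B, C}, so A -> C is a member of the set, while A -> E is not unless D is also given.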

Efficient mining algorithm for uncertain data in probabilistic frequent itemsets
LIU Haoran, LIU Fang'ai, LI Xu, WANG Jiwei
Journal of Computer Applications    2015, 35 (6): 1757-1761.   DOI: 10.11772/j.issn.1001-9081.2015.06.1757

When using pattern growth to construct the tree structure, the existing algorithms for mining probabilistic frequent itemsets suffer from generating a large number of tree nodes, occupying large memory space and having low efficiency. To solve these problems, a Progressive Uncertain Frequent Pattern Growth algorithm, named PUFP-Growth, was proposed. By reading the uncertain database tuple by tuple, the proposed algorithm constructed a tree structure as compact as the Frequent Pattern Tree (FP-Tree) and dynamically updated the arrays of expected values whose header table saved the same itemsets. Once all transactions had been inserted into the Progressive Uncertain Frequent Pattern tree (PUFP-Tree), all probabilistic frequent itemsets could be mined by traversing the dynamic arrays. The experimental results and theoretical analysis show that the PUFP-Growth algorithm finds probabilistic frequent itemsets effectively; compared with the Uncertain Frequent pattern Growth (UF-Growth) and Compressed Uncertain Frequent-Pattern Mine (CUFP-Mine) algorithms, it improves the mining efficiency of probabilistic frequent itemsets on uncertain datasets and reduces memory usage to a certain degree.
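The expected-support quantity that probabilistic frequent itemset mining builds on can be sketched as follows, assuming item independence within a transaction (the usual model for uncertain databases):

```python
def expected_support(db, itemset):
    """Expected support of an itemset in an uncertain database: for each
    transaction, the itemset's existence probability is the product of
    its items' probabilities (independence assumption); summing over all
    transactions gives the expected support used to decide probabilistic
    frequency."""
    total = 0.0
    for transaction in db:          # transaction: {item: probability}
        p = 1.0
        for item in itemset:
            p *= transaction.get(item, 0.0)
        total += p
    return total

# Three uncertain transactions
db = [{"a": 0.9, "b": 0.8}, {"a": 0.5}, {"a": 1.0, "b": 1.0}]
```

Item a has expected support 0.9 + 0.5 + 1.0 = 2.4; the pair {a, b} gets 0.72 + 0 + 1.0 = 1.72, and an itemset is kept when this value clears the minimum-support threshold.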

Middleware design for high-speed railway integrated dispatching system based on SCA and SDO
LUO Qiang, WANG Qian, LIU Fanglin, FAN Ruijuan
Journal of Computer Applications    2013, 33 (06): 1654-1669.   DOI: 10.3724/SP.J.1087.2013.01654
To solve the system integration problems of a high-speed railway integrated dispatching system in a highly distributed, highly heterogeneous environment, a system integration framework based on Service-Oriented Architecture (SOA) was proposed. The structure of the high-speed railway integrated dispatching system and its distributed SOA application were constructed based on Service Component Architecture (SCA) and Service Data Objects (SDO). The integration of the power dispatching subsystem with the other scheduling subsystems was achieved with SCA and SDO on the Java EE platform. The method fully embodies the openness and cross-platform features of SOA and is easy to implement.
Clustering algorithm based on backup path in wireless sensor network
DING Ding, LIU Fang-ai, LI Qian-qian, YANG Guang-xu
Journal of Computer Applications    2012, 32 (04): 920-923.   DOI: 10.3724/SP.J.1087.2012.00920
Clustering can be used in routing algorithms to enhance the scalability of a Wireless Sensor Network (WSN). Concerning the defects of traditional clustering algorithms, a new strategy named EDC (Energy-efficient Dual-path Clustering) was proposed, in which each member node has an optimal backup path. The strategy guarantees that a member node can still transmit data through its backup path when its cluster head dies. Simulation results on the OMNeT++ platform indicate that EDC performs much better than other WSN protocols in terms of network reconstruction time and the number of failed nodes.
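A minimal sketch of the backup-path idea follows. The head-selection rule shown (lowest link cost) is an assumption of this sketch; the paper's EDC strategy defines its own energy-aware criteria:

```python
import math

def assign_heads(member, heads, cost):
    """Pick a primary and a backup cluster head for a member node: the
    two lowest-cost reachable heads. If the primary head dies, the
    member switches to the backup path instead of waiting for a full
    network reconstruction."""
    ranked = sorted(heads, key=lambda h: cost(member, h))
    primary = ranked[0]
    backup = ranked[1] if len(ranked) > 1 else None
    return primary, backup

heads = [(0, 0), (3, 4), (1, 1)]
primary, backup = assign_heads((0.4, 0.5), heads,
                               lambda m, h: math.dist(m, h))
```

The member keeps both choices, so losing the primary head costs one local switch rather than a cluster-wide re-election.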
Nonlinear combinatorial collaborative filtering recommendation algorithm
LI Guo, ZHANG Zhi-bin, LIU Fang-xian, JIANG Bo, YAO Wen-wei
Journal of Computer Applications    2011, 31 (11): 3063-3067.   DOI: 10.3724/SP.J.1087.2011.03063
Collaborative filtering is currently the most popular personalized recommendation technology. However, the existing algorithms are limited to the user-item rating matrix, which suffers from sparsity and cold-start problems, and neighbor similarity considers only the items that users have rated in common, ignoring the correlation between item attributes and user characteristics. In addition, traditional algorithms give users' interests at different times equal weight, and thus lack timeliness. Concerning the above problems, a nonlinear combinatorial collaborative filtering algorithm was proposed. To obtain more accurate nearest-neighbor sets, the neighbor similarity calculation was improved based on item attributes and user characteristics respectively. Furthermore, the rating matrix was made much denser by filling it with initial prediction ratings. Finally, a time weight was added to the final prediction rating, so that users' latest interests take the largest weight. The experimental results show that the optimized algorithm increases prediction precision, alleviates the sparsity and cold-start problems, and realizes real-time recommendation effectively.
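The time-weighting step can be sketched as an exponentially decayed weighted average, so recent ratings dominate. The exponential decay form and the half-life value are assumptions of this sketch, not the paper's exact weighting function:

```python
import math

def time_weighted_prediction(neighbor_ratings, now, half_life=30.0):
    """Time-weighted rating prediction: each neighbor rating is weighted
    by its similarity times an exponential decay of its age, so the
    user's latest interests carry the largest weight."""
    num = den = 0.0
    for rating, sim, timestamp in neighbor_ratings:
        w = sim * math.exp(-math.log(2) * (now - timestamp) / half_life)
        num += w * rating
        den += w
    return num / den if den else 0.0

# (rating, similarity, timestamp in days); the old rating is 60 days stale
ratings = [(5.0, 1.0, 100.0), (1.0, 1.0, 40.0)]
pred = time_weighted_prediction(ratings, now=100.0)
```

The 60-day-old rating of 1 is weighted at 25% of the fresh rating of 5, pulling the prediction toward the recent interest.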
Computation of radar cross section in high frequency region for SAR imaging simulation of ship targets
SUN Yu-Kang, WANG Run-Sheng, LIU Fang, QI Bin
Journal of Computer Applications   
To compute the Radar Cross Section (RCS) in the high-frequency region, a method combining fast modeling with improved graphical electromagnetic computing was presented. Using collected information about the ship and modeling software, the shape of the ship was modeled accurately, and the radar cross section was computed precisely by an improved graphical electromagnetic computing algorithm integrating the Physical Optics (PO) method and the Incremental Length Diffraction Coefficients (ILDC) method. The simulation experiment shows that this nearly real-time method produces good SAR image simulation results.
Research on economy-based optimisation of admission policy on a grid cache
SONG Feng-long,LIU Fang-ai
Journal of Computer Applications    2005, 25 (12): 2919-2920.  
To reduce latency and maximize the income of the local storage broker, an economic model was used to optimize the admission policy of a grid cache, and an effective admission policy was proposed that computes the value difference incurred when caching an object, which resolves the income conflict. Test results proved the effectiveness of the policy.
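The value-difference idea can be sketched as a simple admission check: cache the incoming object only when its value exceeds the total value of what must be evicted. The eviction order and all numbers here are assumptions for illustration; the paper derives object values from its economic model:

```python
def should_cache(obj_value, evict_values, obj_size, free_space):
    """Value-difference admission: admit the incoming object only if its
    economic value exceeds the combined value of the objects that would
    be evicted to make room for it (cheapest cached objects first)."""
    if obj_size <= free_space:
        return True                              # fits without eviction
    needed = obj_size - free_space
    evicted, freed = 0.0, 0
    for value, size in sorted(evict_values):     # evict cheapest first
        evicted += value
        freed += size
        if freed >= needed:
            break
    return freed >= needed and obj_value > evicted

# (value, size) of the currently cached objects
evictables = [(1.0, 5), (2.0, 5)]
```

A high-value object (value 10, size 8) is admitted because evicting the two cached objects only sacrifices value 3; a low-value object (value 2) is rejected, protecting the broker's income.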